74 research outputs found
Functional organization of human sensorimotor cortex for speech articulation.
Speaking is one of the most complex actions that we perform, but nearly all of us learn to do it effortlessly. Production of fluent speech requires the precise, coordinated movement of multiple articulators (for example, the lips, jaw, tongue and larynx) over rapid time scales. Here we used high-resolution, multi-electrode cortical recordings during the production of consonant-vowel syllables to determine the organization of speech sensorimotor cortex in humans. We found speech-articulator representations that are arranged somatotopically on ventral pre- and post-central gyri, and that partially overlap at individual electrodes. These representations were coordinated temporally as sequences during syllable production. Spatial patterns of cortical activity showed an emergent, population-level representation, which was organized by phonetic features. Over tens of milliseconds, the spatial patterns transitioned between distinct representations for different consonants and vowels. These results reveal the dynamic organization of speech sensorimotor cortex during the generation of the multi-articulator movements that underlie our ability to speak.
Robust, automated sleep scoring by a compact neural network with distributional shift correction.
Studying the biology of sleep requires the accurate assessment of the state of experimental subjects, and manual analysis of relevant data is a major bottleneck. Recently, deep learning applied to electroencephalogram and electromyogram data has shown great promise as a sleep scoring method, approaching the limits of inter-rater reliability. As with any machine learning algorithm, the inputs to a sleep scoring classifier are typically standardized in order to remove distributional shift caused by variability in the signal collection process. However, in scientific data, experimental manipulations introduce variability that should not be removed. For example, in sleep scoring, the fraction of time spent in each arousal state can vary between control and experimental subjects. We introduce a standardization method, mixture z-scoring, that preserves this crucial form of distributional shift. Using both a simulated experiment and mouse in vivo data, we demonstrate that a common standardization method used by state-of-the-art sleep scoring algorithms introduces systematic bias, but that mixture z-scoring does not. We present a free, open-source user interface that uses a compact neural network and mixture z-scoring to allow for rapid sleep scoring with accuracy that compares well to contemporary methods. This work provides a set of computational tools for the robust automation of sleep scoring.
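The key idea in the abstract — standardizing with class-conditional statistics blended by fixed mixture weights, rather than with pooled per-recording statistics — can be sketched as follows. This is a minimal illustration of the concept, not the paper's implementation; the function name, the weighting scheme, and the use of the law of total variance are assumptions.

```python
import numpy as np

def mixture_zscore(x, labels, weights):
    """Standardize x using class-conditional means/variances combined
    with FIXED mixture weights. Because the weights do not track the
    empirical class fractions of a given subject, a subject who spends
    more time in one arousal state is not re-centered differently.
    (Illustrative sketch; the paper's exact formulation may differ.)"""
    classes = sorted(weights)
    mu = np.array([x[labels == c].mean() for c in classes])
    var = np.array([x[labels == c].var() for c in classes])
    w = np.array([weights[c] for c in classes])
    m = w @ mu                              # mixture mean
    s = np.sqrt(w @ (var + mu**2) - m**2)   # mixture std (law of total variance)
    return (x - m) / s

# Two toy "subjects" with identical class-conditional signals but very
# different class fractions (50/50 vs 90/10):
xa = np.array([0., 0., 10., 10.]); la = np.array([0, 0, 1, 1])
xb = np.array([0.] * 9 + [10.]);   lb = np.array([0] * 9 + [1])
w = {0: 0.5, 1: 0.5}
za = mixture_zscore(xa, la, w)
zb = mixture_zscore(xb, lb, w)
```

Under plain z-scoring the 90/10 subject's pooled mean would shift toward class 0, distorting its standardized values; with the fixed mixture weights, the standardized value of each class is identical across both subjects.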
Deep learning as a tool for neural data analysis: Speech classification and cross-frequency coupling in human sensorimotor cortex.
A fundamental challenge in neuroscience is to understand what structure in the world is represented in spatially distributed patterns of neural activity from multiple single-trial measurements. This is often accomplished by learning simple, linear transformations between neural features and features of the sensory stimuli or motor task. While successful in some early sensory processing areas, linear mappings are unlikely to be ideal tools for elucidating the nonlinear, hierarchical representations of higher-order brain areas during complex tasks, such as the production of speech by humans. Here, we apply deep networks to predict produced speech syllables from a dataset of high-gamma cortical surface electric potentials recorded from human sensorimotor cortex. We find that deep networks had higher decoding prediction accuracy compared to baseline models. Having established that deep networks extract more task-relevant information from neural data sets relative to linear models (i.e., higher predictive accuracy), we next sought to demonstrate their utility as a data analysis tool for neuroscience. We first show that the deep networks' confusions revealed hierarchical latent structure in the neural data, which recapitulated the underlying articulatory nature of speech motor control. We next broadened the frequency features beyond high gamma and identified a novel high-gamma-to-beta coupling during speech production. Finally, we used deep networks to compare task-relevant information in different neural frequency bands, and found that the high-gamma band contains the vast majority of information relevant for the speech prediction task, with little-to-no additional contribution from lower-frequency amplitudes. Together, these results demonstrate the utility of deep networks as a data analysis tool for basic and applied neuroscience.
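The basic decoding setup described here — predicting syllable identity from spatial patterns of high-gamma amplitude with a nonlinear network, then inspecting the classifier's confusions — can be sketched on synthetic data. The feature construction, network size, and all names below are illustrative assumptions, not the paper's architecture or data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trials, n_electrodes, n_syllables = 300, 20, 3

# Hypothetical stand-in for single-trial high-gamma amplitudes: each
# syllable class has its own spatial activation pattern plus noise.
patterns = rng.normal(0, 1, (n_syllables, n_electrodes))
y = rng.integers(0, n_syllables, n_trials)
X = patterns[y] + rng.normal(0, 0.8, (n_trials, n_electrodes))

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                    random_state=0).fit(Xtr, ytr)
acc = clf.score(Xte, yte)
cm = confusion_matrix(yte, clf.predict(Xte))
```

The off-diagonal entries of `cm` are the confusions; in the paper, clustering their structure is what recovers the articulatory (place/manner-like) organization of the syllables.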
AutoCT: Automated CT registration, segmentation, and quantification
The processing and analysis of computed tomography (CT) imaging is important for both basic scientific development and clinical applications. In AutoCT, we provide a comprehensive pipeline that integrates end-to-end automatic preprocessing, registration, segmentation, and quantitative analysis of 3D CT scans. The engineered pipeline enables atlas-based CT segmentation and quantification, leveraging diffeomorphic transformations through efficient forward and inverse mappings. The localized features extracted from the deformation field allow for downstream statistical learning that may facilitate medical diagnostics. On a lightweight and portable software platform, AutoCT provides a new toolkit for the CT imaging community to underpin the deployment of artificial-intelligence-driven applications.
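The atlas-based segmentation step rests on one idea: once a transformation between atlas space and subject space is known, atlas labels can be propagated by pulling values through the inverse mapping. A toy 2D sketch with a constant displacement stands in for the deformation field here; the names and the trivial "deformation" are illustrative, not AutoCT's API.

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Atlas label image with one labeled "organ" region.
atlas_labels = np.zeros((32, 32), dtype=np.int32)
atlas_labels[8:16, 8:16] = 1

# Inverse mapping: subject voxel (i, j) samples atlas voxel (i-di, j-dj).
# A real pipeline would use a full diffeomorphic deformation field; a
# constant shift keeps the example checkable by eye.
di, dj = 3.0, -2.0
ii, jj = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
coords = np.array([ii - di, jj - dj])

# order=0 -> nearest-neighbour interpolation, so labels stay integers.
subject_labels = map_coordinates(atlas_labels, coords, order=0)
```

The labeled region lands in subject space shifted by (+3, -2), and its area is preserved — the property that makes label propagation through an invertible (diffeomorphic) mapping attractive for quantification.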
An adapting auditory-motor feedback loop can contribute to generating vocal repetition
Consecutive repetition of actions is common in behavioral sequences. Although integration of sensory feedback with internal motor programs is important for sequence generation, whether and how feedback contributes to repetitive actions is poorly understood. Here we study how auditory feedback contributes to generating repetitive syllable sequences in songbirds. We propose that auditory signals provide positive feedback to ongoing motor commands, but that this influence decays as the feedback response weakens through adaptation during syllable repetitions. Computational models show that this mechanism explains the repeat distributions observed in Bengalese finch song. We experimentally confirmed two predictions of this mechanism in Bengalese finches: removal of auditory feedback by deafening reduces syllable repetitions; and neural responses to auditory playback of repeated syllable sequences gradually adapt in the sensory-motor nucleus HVC. Together, our results implicate a positive auditory-feedback loop with adaptation in generating repetitive vocalizations, and suggest that sensory adaptation is important for feedback control of motor sequences.
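The proposed mechanism — positive auditory drive that decays through response adaptation across repetitions — can be captured in a few lines. All parameter values and names below are illustrative assumptions, not the paper's fitted model; the sketch only shows that the mechanism reproduces the deafening prediction (fewer repeats without feedback).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_repeats(feedback_gain, n_songs=5000, p0=0.3, adapt=0.7):
    """Toy version of the proposed loop: auditory feedback adds positive
    drive to repeat the syllable, but the drive decays geometrically
    (response adaptation) with each repetition, so repeat bouts end."""
    counts = []
    for _ in range(n_songs):
        n, drive = 1, feedback_gain
        while rng.random() < p0 + drive:
            n += 1
            drive *= adapt      # sensory response adapts, feedback weakens
        counts.append(n)
    return np.array(counts)

intact = sample_repeats(feedback_gain=0.5)
deaf = sample_repeats(feedback_gain=0.0)  # feedback removed, as by deafening
```

With the feedback term removed, repeat counts collapse toward a short geometric distribution, mirroring the experimentally confirmed prediction that deafening reduces syllable repetitions.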